heuristic search


Consistency-based Abductive Reasoning over Perceptual Errors of Multiple Pre-trained Models in Novel Environments

Leiva, Mario, Ngu, Noel, Kricheli, Joshua Shay, Taparia, Aditya, Senanayake, Ransalu, Shakarian, Paulo, Bastian, Nathaniel, Corcoran, John, Simari, Gerardo

arXiv.org Artificial Intelligence

The deployment of pre-trained perception models in novel environments often leads to performance degradation due to distributional shifts. Although recent artificial intelligence approaches for metacognition use logical rules to characterize and filter model errors, improving precision often comes at the cost of reduced recall. This paper addresses the hypothesis that leveraging multiple pre-trained models can mitigate this recall reduction. We formulate the challenge of identifying and managing conflicting predictions from multiple models as a consistency-based abduction problem, building on the idea of abductive learning (ABL) but applying it at test time rather than during training. The input predictions and the learned error-detection rules derived from each model are encoded in a logic program. We then seek an abductive explanation, a subset of model predictions, that maximizes prediction coverage while ensuring that the rate of logical inconsistencies (derived from domain constraints) remains below a specified threshold. We propose two algorithms for this knowledge representation task: an exact method based on Integer Programming (IP) and an efficient Heuristic Search (HS). Through extensive experiments on a simulated aerial imagery dataset featuring controlled, complex distributional shifts, we demonstrate that our abduction-based framework outperforms individual models and standard ensemble baselines, achieving, for instance, average relative improvements of approximately 13.6% in F1-score and 16.6% in accuracy across 15 diverse test datasets when compared to the best individual model. Our results validate consistency-based abduction as an effective mechanism for robustly integrating knowledge from multiple imperfect models in challenging, novel scenarios.
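The core optimization the abstract describes, choosing a subset of predictions that maximizes coverage while keeping the constraint-violation rate under a threshold, can be illustrated with a minimal brute-force sketch. This is a stand-in for the paper's IP formulation, not the authors' encoding; the data model (prediction identifiers and pairwise inconsistency constraints) is hypothetical.

```python
from itertools import combinations

def abduce(predictions, inconsistent_pairs, max_inconsistency_rate):
    """Return the largest subset of predictions whose rate of violated
    domain constraints stays at or below the threshold.

    Brute-force stand-in for an Integer Programming formulation:
    we scan subsets from largest to smallest, so the first feasible
    subset found maximizes prediction coverage.
    """
    ids = sorted(predictions)
    for r in range(len(ids), 0, -1):
        for subset in combinations(ids, r):
            chosen = set(subset)
            violations = sum(1 for a, b in inconsistent_pairs
                             if a in chosen and b in chosen)
            if violations / len(chosen) <= max_inconsistency_rate:
                return chosen
    return set()

# Two models disagree on one object; with a zero-tolerance threshold,
# abduction keeps the largest conflict-free subset.
preds = ["m1_car", "m2_truck", "m3_car"]
conflicts = [("m1_car", "m2_truck")]
explanation = abduce(preds, conflicts, 0.0)
```

Raising the threshold trades consistency for coverage: with `max_inconsistency_rate=0.34` the example above admits all three predictions, since one violation among three chosen predictions gives a rate of about 0.33.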


Enhancing Cognitive Robotics with Commonsense through LLM-Generated Preconditions and Subgoals

Bachner, Ohad, Gamliel, Bar

arXiv.org Artificial Intelligence

Autonomous robots are increasingly deployed in dynamic and unstructured environments, where they must plan and execute complex tasks under uncertainty. Classical planning approaches, typically modeled in PDDL and solved with heuristic search, provide a principled foundation for task planning (Edelkamp and Schrödl, 2011; Geffner and Bonet, 2013). However, these methods rely on explicit domain models that enumerate the preconditions and effects of actions. In practice, such models often omit implicit commonsense knowledge, for example, that a container must be upright before pouring, or that water must be boiled before making tea. The absence of such knowledge can lead to plans that are logically correct but physically invalid. Cognitive robotics research seeks to bridge symbolic reasoning with robot perception and control (Ghallab et al., 2004). While significant progress has been made in integrating planning with motion control and execution, robots still lack the ability to autonomously infer commonsense constraints that humans consider obvious. Large Language Models (LLMs), trained on massive corpora of human knowledge, present a promising avenue for addressing this gap. LLMs can generate likely preconditions, subgoals, and contextual constraints from natural language task descriptions, potentially enriching classical planning models.
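The pouring example above can be made concrete with a minimal sketch of how LLM-suggested preconditions would enrich an action model. All names here are illustrative, not from the paper, and the LLM suggestions are hard-coded as a stand-in for an actual model call.

```python
# Hypothetical PDDL-like action model for a "pour" task.
base_action = {
    "name": "pour",
    "preconditions": {"holding(container)"},
    "effects": {"poured(liquid)"},
}

# Commonsense preconditions as an LLM might propose them
# (hard-coded stand-in for a real model query).
llm_suggested = {"upright(container)", "open(container)"}

def enrich(action, suggestions):
    """Return a copy of the action with suggested preconditions merged in."""
    enriched = dict(action)
    enriched["preconditions"] = action["preconditions"] | suggestions
    return enriched

def applicable(action, state):
    """An action is applicable when every precondition holds in the state."""
    return action["preconditions"] <= state

# The container is held and open, but not known to be upright:
state = {"holding(container)", "open(container)"}
pour = enrich(base_action, llm_suggested)
```

In this state the base model would let `pour` fire, producing a plan that is logically correct but physically invalid; the enriched model blocks it until `upright(container)` is established, which is exactly the gap the paper targets.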



Bidirectional Bounded-Suboptimal Heuristic Search with Consistent Heuristics

Shperberg, Shahaf S., Morad, Natalie, Siag, Lior, Felner, Ariel, Atzmon, Dor

arXiv.org Artificial Intelligence

Recent advancements in bidirectional heuristic search have yielded significant theoretical insights and novel algorithms. While most previous work has concentrated on optimal search methods, this paper focuses on bounded-suboptimal bidirectional search, where a bound on the suboptimality of the solution cost is specified. We build upon the state-of-the-art optimal bidirectional search algorithm, BAE*, designed for consistent heuristics, and introduce several variants of BAE* specifically tailored for the bounded-suboptimal context. Through experimental evaluation, we compare the performance of these new variants against other bounded-suboptimal bidirectional algorithms as well as the standard weighted A* algorithm. Our results demonstrate that each algorithm excels under distinct conditions, highlighting the strengths and weaknesses of each approach.
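For readers unfamiliar with the bounded-suboptimal setting, the baseline the abstract compares against can be sketched in a few lines. This is generic weighted A*, not the BAE* variants the paper introduces: inflating an admissible heuristic by a weight w guarantees the returned cost is at most w times optimal, which is the same suboptimality bound the bidirectional variants target.

```python
import heapq

def weighted_astar(start, goal, neighbors, h, w=1.5):
    """Weighted A* with priority f(n) = g(n) + w*h(n).

    With an admissible heuristic h, the solution cost is guaranteed
    to be at most w times the optimal cost. `neighbors(n)` yields
    (successor, edge_cost) pairs.
    """
    open_list = [(w * h(start), 0, start)]
    best_g = {start: 0}
    while open_list:
        f, g, n = heapq.heappop(open_list)
        if n == goal:
            return g
        if g > best_g.get(n, float("inf")):
            continue  # stale heap entry, a cheaper path was found later
        for m, cost in neighbors(n):
            g2 = g + cost
            if g2 < best_g.get(m, float("inf")):
                best_g[m] = g2
                heapq.heappush(open_list, (g2 + w * h(m), g2, m))
    return None
```

On a trivial chain graph (states 0..5, unit edges, h(n) = 5 - n) the optimal cost is 5, and with w = 1.5 any returned cost is certified to be at most 7.5.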



Review for NeurIPS paper: Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces

Neural Information Processing Systems

Additional Feedback: The motivating example could be explained more clearly. How exactly is the heuristic information incorporated into the search for a_dir? If a simulator is available, one typically wouldn't use a model-free algorithm like REINFORCE. A major benefit of REINFORCE is that it can do a Monte Carlo rollout and have an estimate of the direction to improve the policy without needing a simulator or a model of the environment. Once a simulator is added, it changes the structure of the problem such that different solution methods become available (i.e., MCTS).


On Parallel External-Memory Bidirectional Search

Siag, Lior, Shperberg, Shahaf S., Felner, Ariel, Sturtevant, Nathan R.

arXiv.org Artificial Intelligence

Parallelization and External Memory (PEM) techniques have significantly enhanced the capabilities of search algorithms when solving large-scale problems. Previous research on PEM has primarily centered on unidirectional algorithms, with only one publication on bidirectional PEM, which focuses on the meet-in-the-middle (MM) algorithm. Building upon this foundation, this paper presents a framework that accommodates both uni- and bidirectional best-first search algorithms. We then develop a PEM variant of the state-of-the-art bidirectional heuristic search (BiHS) algorithm BAE* (PEM-BAE*). As previous work on BiHS did not focus on scaling problem sizes, this work enables us to evaluate bidirectional algorithms on hard problems. Empirical evaluation shows that PEM-BAE* outperforms the PEM variants of A* and the MM algorithm, as well as a parallel variant of IDA*. These findings mark a significant milestone, revealing that bidirectional search algorithms clearly outperform unidirectional search algorithms across several domains, even when equipped with state-of-the-art heuristics.


Asymptotically Optimal Sampling-Based Path Planning Using Bidirectional Guidance Heuristic

Wang, Yi, Mu, Bingxian

arXiv.org Artificial Intelligence

This paper introduces Bidirectional Guidance Informed Trees (BIGIT*), a new asymptotically optimal sampling-based motion planning algorithm. Capitalizing on the meet-in-the-middle property of bidirectional heuristic search, combined with a new lazy strategy and uniform-cost search, BIGIT* constructs an implicitly bidirectional preliminary motion tree on an implicit random geometric graph (RGG). This efficiently tightens the informed search region, serving as an admissible and accurate bidirectional guidance heuristic. This heuristic is subsequently used to guide a bidirectional heuristic search in finding a valid path on the given RGG. Experiments show that BIGIT* outperforms existing informed sampling-based motion planners both in finding an initial solution faster and in converging to the optimum on simulated abstract problems in R^16. Practical drone flight path planning tasks across a campus also verify our results.


Deep Memory Search: A Metaheuristic Approach for Optimizing Heuristic Search

Hedar, Abdel-Rahman, Abdel-Hakim, Alaa E., Deabes, Wael, Alotaibi, Youseef, Bouazza, Kheir Eddine

arXiv.org Artificial Intelligence

Metaheuristic search methods have proven to be essential tools for tackling complex optimization challenges, but their full potential is often constrained by conventional algorithmic frameworks. In this paper, we introduce a novel approach called Deep Heuristic Search (DHS), which models metaheuristic search as a memory-driven process. DHS employs multiple search layers and memory-based exploration-exploitation mechanisms to navigate large, dynamic search spaces. By utilizing model-free memory representations, DHS enhances the ability to traverse temporal trajectories without relying on probabilistic transition models. The proposed method demonstrates significant improvements in search efficiency and performance across a range of heuristic optimization problems.


Parallel Strategies for Best-First Generalized Planning

Fernández-Alburquerque, Alejandro, Segovia-Aguas, Javier

arXiv.org Artificial Intelligence

In recent years, there has been renewed interest in closing the performance gap between state-of-the-art planning solvers and generalized planning (GP), a research area of AI that studies the automated synthesis of algorithmic-like solutions capable of solving multiple classical planning instances. One of the current advancements has been the introduction of Best-First Generalized Planning (BFGP), a GP algorithm based on a novel solution space that can be explored with heuristic search, one of the foundations of modern planners. This paper evaluates the application of parallel search techniques to BFGP, another critical component in closing the performance gap. We first discuss why BFGP is well suited for parallelization, along with some of its differentiating characteristics from classical planners. Then, we propose two simple shared-memory parallel strategies that scale well with the number of cores.
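The shared-memory idea behind such strategies can be sketched generically: several worker threads pull from one lock-protected open list. This is a minimal illustration of the pattern, not the BFGP-specific strategies from the paper, and its termination check is simplified (a worker that finds the open list empty simply retires; a production version needs a proper idle-worker protocol).

```python
import heapq
import threading

def parallel_best_first(start, is_goal, successors, h, num_workers=4):
    """Greedy best-first search with one shared, lock-protected open list.

    All workers pop the node with the lowest heuristic value; the first
    goal found is returned. Sketch of the shared-memory parallelization
    pattern only.
    """
    open_list = [(h(start), start)]
    seen = {start}
    lock = threading.Lock()
    result = []

    def worker():
        while not result:
            with lock:
                if not open_list:
                    return  # simplified termination: idle workers retire
                _, node = heapq.heappop(open_list)
            if is_goal(node):
                result.append(node)
                return
            with lock:
                for child in successors(node):
                    if child not in seen:
                        seen.add(child)
                        heapq.heappush(open_list, (h(child), child))

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result[0] if result else None
```

A single shared open list keeps the expansion order close to sequential best-first search but makes the lock a contention point; the alternative family of strategies gives each worker its own open list and periodically exchanges nodes, trading expansion-order quality for scalability.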